On Distributed Nonconvex Optimization: Projected Subgradient Method for Weakly Convex Problems in Networks

Abstract

The stochastic subgradient method is a widely used algorithm for solving large-scale optimization problems arising in machine learning. Often, these problems are neither smooth nor convex. Recently, Davis et al. (2018) characterized the convergence of the stochastic subgradient method in the weakly convex case, which encompasses many important applications (e.g., robust phase retrieval, blind deconvolution, biconvex compressive sensing, and dictionary learning). In practice, distributed implementations of the projected stochastic subgradient method (stoDPSM) are used to speed up risk minimization. In this article, we propose a distributed implementation of the stochastic subgradient method with a theoretical guarantee. Specifically, we show the global convergence of stoDPSM using the Moreau envelope as a stationarity measure. Furthermore, under the so-called sharpness condition, we show that the deterministic variant, DPSM (with a proper initialization), converges linearly to the sharp minima using a geometrically diminishing step size. We provide numerical experiments to support our analysis.
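To make the update concrete, the following is a minimal sketch of one plausible stoDPSM iteration (consensus averaging with neighbors, then a projected stochastic subgradient step) on a toy robust phase retrieval instance, one of the weakly convex applications named above. The ring topology, mixing weights, warm-start initialization, and geometric step-size constants are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
# Illustrative sketch of distributed projected stochastic subgradient
# (stoDPSM) on robust phase retrieval: f_i(x) = mean_k |<a_ik, x>^2 - b_ik|.
# Network, sizes, and step-size schedule are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, m_local = 8, 20, 50
radius = 10.0  # projection set X = {x : ||x||_2 <= radius}

# Ground truth and per-agent data (noiseless measurements for brevity).
x_star = rng.normal(size=dim)
A = [rng.normal(size=(m_local, dim)) for _ in range(n_agents)]
B = [(Ai @ x_star) ** 2 for Ai in A]

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in (i - 1, i, i + 1):
        W[i, j % n_agents] = 1.0 / 3.0

def proj(x):
    """Euclidean projection onto the ball of radius `radius`."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def stoch_subgrad(i, x):
    """Stochastic subgradient of agent i's loss at x (one random sample)."""
    k = rng.integers(m_local)
    a, b = A[i][k], B[i][k]
    r = a @ x
    return np.sign(r * r - b) * 2.0 * r * a

# Proper (warm) initialization near a sharp minimizer, shared by all agents.
X = np.tile(x_star + 0.3 * rng.normal(size=dim), (n_agents, 1))
gamma, rho = 0.01, 0.99  # geometrically diminishing step size gamma * rho^t
for t in range(500):
    V = W @ X  # consensus: mix iterates received from neighbors
    X = np.array([proj(V[i] - gamma * stoch_subgrad(i, V[i]))
                  for i in range(n_agents)])
    gamma *= rho

x_bar = X.mean(axis=0)  # phase retrieval recovers x_star only up to sign
err = min(np.linalg.norm(x_bar - x_star), np.linalg.norm(x_bar + x_star))
print(f"distance to the sharp minimizer (up to sign): {err:.3e}")
```

Per the abstract, the geometrically decaying step size is tied to the linear rate claimed for the deterministic variant under sharpness, while the stochastic analysis measures stationarity through the Moreau envelope.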

Similar Articles

On the projected subgradient method for nonsmooth convex optimization in a Hilbert space

We consider the method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an εk-subgradient of the objective at a current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes αk are exogenously given, satisfying Σ αk = ∞ and Σ αk² < ∞ (summing over k ≥ 0), and εk is chosen so that εk → 0. We prove t...
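Read literally, each iteration moves opposite an inexact subgradient and then projects orthogonally onto the feasible set. Below is a small finite-dimensional sketch under assumed ingredients: an l1 objective, a unit-ball constraint, stepsizes αk = 1/(k+1) (divergent sum, square-summable), and additive noise shrinking at rate εk as a crude stand-in for a true εk-subdifferential element.

```python
# Sketch of a projected inexact-subgradient iteration in R^n (a stand-in
# for the Hilbert space). Objective, constraint set, and schedules are
# illustrative assumptions, not the paper's setting.
import numpy as np

rng = np.random.default_rng(1)
dim = 30
c = 2.0 * rng.normal(size=dim)  # minimize f(x) = ||x - c||_1 over the unit ball

def proj_ball(x):
    """Orthogonal projection onto {x : ||x||_2 <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.zeros(dim)
for k in range(5000):
    g = np.sign(x - c)                        # exact subgradient of ||x - c||_1
    eps_k = 1.0 / (k + 1)                     # inexactness level, eps_k -> 0
    g_eps = g + eps_k * rng.normal(size=dim)  # inexact subgradient surrogate
    alpha_k = 1.0 / (k + 1)                   # sum alpha_k = inf, sum alpha_k^2 < inf
    # Normalized step opposite the inexact subgradient, then project.
    x = proj_ball(x - alpha_k * g_eps / max(np.linalg.norm(g_eps), 1e-12))

print("f(x) =", np.abs(x - c).sum())
```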

Mirror descent and nonlinear projected subgradient methods for convex optimization

The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is only mildly dependent on the dimension of the decision variables, and is thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonline...
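For intuition, the best-known instance of this nonlinear-projection viewpoint is entropic mirror descent on the probability simplex, where the efficiency estimate grows only like the square root of log n in the dimension n. The linear objective and step-size rule below are illustrative textbook choices, not taken from the paper.

```python
# Entropic mirror descent over the probability simplex: multiplicative
# update followed by renormalization (the Bregman "projection" here).
import numpy as np

rng = np.random.default_rng(2)
n = 1000
c = rng.normal(size=n)  # minimize f(x) = <c, x> over the probability simplex

T = 2000
L = np.abs(c).max()                     # Lipschitz constant in the l1 geometry
eta = np.sqrt(2.0 * np.log(n) / T) / L  # textbook step size, ~ sqrt(log n)

x = np.full(n, 1.0 / n)                 # uniform start: minimizer of neg-entropy
for _ in range(T):
    g = c                               # (sub)gradient of the linear objective
    x = x * np.exp(-eta * g)            # multiplicative update from entropy mirror map
    x /= x.sum()                        # renormalize back onto the simplex
print("optimality gap:", c @ x - c.min())
```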

Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems

In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions—a wide class of functions which includes the additive and convex composite classes. At a high level, the method is an inexact proximal point iteration in which the strongly convex proximal subproblems are quickly solved with a specialized stochast...
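A toy rendering of that two-level structure is sketched below: each outer step forms the strongly convex model f(y) + (rho/2)||y - x_t||^2 around the current point, runs a short stochastic subgradient loop on it, and moves to the averaged inner iterate. The weakly convex phase-retrieval loss, the value of rho, the inner budget, and the step sizes are all assumptions for illustration.

```python
# Proximally guided stochastic subgradient sketch: outer inexact proximal
# point loop, inner SGD on the rho-strongly convex proximal subproblem.
import numpy as np

rng = np.random.default_rng(3)
dim, m = 10, 200
x_star = rng.normal(size=dim)
A = rng.normal(size=(m, dim))
b = (A @ x_star) ** 2          # f(x) = mean_i |<a_i, x>^2 - b_i|, weakly convex

def stoch_subgrad(x):
    """Stochastic subgradient of f at x from one random sample."""
    i = rng.integers(m)
    r = A[i] @ x
    return np.sign(r * r - b[i]) * 2.0 * r * A[i]

rho = 5.0                                  # assumed > weak-convexity modulus
x = x_star + 0.5 * rng.normal(size=dim)    # warm start near a minimizer
for t in range(200):                       # outer: inexact proximal point
    y, y_sum = x.copy(), np.zeros(dim)
    inner = 50
    for s in range(inner):                 # inner: SGD on the strongly convex model
        g = stoch_subgrad(y) + rho * (y - x)
        g = g / max(1.0, np.linalg.norm(g) / 10.0)  # crude clipping for demo stability
        y = y - (2.0 / (rho * (s + 2))) * g         # classic 2/(mu*(s+2)) stepsize
        y_sum += y
    x = y_sum / inner                      # move to the averaged inner iterate

err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"distance to a minimizer (up to sign): {err:.3e}")
```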

Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradi...
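The two-phase update described above (combine the received iterates with one's own, then take a projected stochastic subgradient step on the local cost) can be sketched in a few lines. Here the agents jointly minimize a sum of Euclidean distances over a common box; the complete-graph weights, the noise model, and the 1/sqrt(k) step size are illustrative assumptions.

```python
# Distributed stochastic subgradient projection sketch: consensus mixing,
# then a projected noisy subgradient step on each agent's convex cost
# f_i(x) = ||x - c_i||_2, subject to a shared box constraint.
import numpy as np

rng = np.random.default_rng(4)
n_agents, dim = 6, 3
C = rng.normal(size=(n_agents, dim)) + 2.0         # agent i's target c_i
lo, hi = -1.0, 1.0                                 # common box constraint

W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic weights

X = rng.normal(size=(n_agents, dim))
for k in range(4000):
    V = W @ X                                # combine neighbors' iterates
    alpha = 1.0 / np.sqrt(k + 1)
    G = np.zeros_like(X)
    for i in range(n_agents):
        d = V[i] - C[i]
        g = d / max(np.linalg.norm(d), 1e-12)   # subgradient of ||x - c_i||
        G[i] = g + 0.1 * rng.normal(size=dim)   # stochastic gradient noise
    X = np.clip(V - alpha * G, lo, hi)       # subgradient step + box projection

print("consensus iterate:", X.mean(axis=0).round(3))
```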

Inexact subgradient methods for quasi-convex optimization problems

In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise, which come from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient ...
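As a toy instance of such an inexact scheme, consider the quasi-convex objective f(x) = log(1 + ||x - c||), whose sublevel sets are balls around c, so the unit vector (x - c)/||x - c|| is a valid normalized quasi-subgradient. A bounded deterministic perturbation below mimics the computational errors and noise studied above; all constants are assumptions for the demo.

```python
# Normalized inexact subgradient sketch for a quasi-convex objective,
# with a deterministic, bounded error added to each direction.
import numpy as np

c = np.array([3.0, -2.0, 1.0])
x = np.zeros(3)
err_bound = 0.05                            # bound on the deterministic error
for k in range(3000):
    d = x - c
    g = d / max(np.linalg.norm(d), 1e-12)   # normalized quasi-subgradient of f
    noise = err_bound * np.sin(k) * np.ones(3) / np.sqrt(3.0)  # ||noise|| <= err_bound
    alpha_k = 1.0 / (k + 1)                 # diminishing step size
    x = np.clip(x - alpha_k * (g + noise), -5.0, 5.0)  # step + box projection

print("f(x) =", np.log1p(np.linalg.norm(x - c)))
```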

Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2022

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2021.3056535